Remote sensing imagery is frequently degraded by dense haze and thin clouds, which hinder accurate Earth observation and downstream analysis. In this paper, we propose a densely connected U-Net-based deep learning architecture tailored for haze and cloud removal in satellite images. The proposed model is trained on the RS-Haze dataset, and transfer learning is performed on Cartosat-2E MX data from the Indian satellite. The model is trained with a combined SSIM and MSE loss to preserve structural integrity and pixel-level detail. After training for 200 epochs, the model demonstrates strong generalization across varying atmospheric conditions, effectively removing dense haze while preserving critical land features such as river boundaries, urban layouts, and agricultural zones. Quantitative results confirm improvements in PSNR and SSIM over existing baselines, and qualitative assessments further validate the model’s capability to enhance image clarity for remote sensing applications. The proposed approach offers a promising tool for improving image clarity for geospatial analytics in hazy environments.
Introduction
Satellite imagery is crucial for Earth observation tasks such as environmental monitoring and disaster response, but it often suffers from quality degradation due to atmospheric effects such as haze and thin clouds. Traditional haze removal methods, based on physical models such as the dark channel prior, struggle with dense haze and complex backgrounds. Recent deep learning approaches show promise but often do not generalize well to satellite images due to differences in texture and spectral properties.
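For context, the dark channel prior mentioned above rests on a simple statistic: in haze-free outdoor images, most non-sky patches contain at least one pixel that is dark in some color channel, whereas haze lifts this minimum. A minimal NumPy sketch of that statistic is shown below; the brute-force min-filter and the `patch` size are illustrative simplifications, and full dark-channel dehazing additionally estimates atmospheric light and transmission, which are omitted here.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an HxWx3 image in [0, 1]:
    per-pixel minimum over the color channels, followed by a
    minimum filter over a patch x patch neighborhood."""
    min_rgb = img.min(axis=2)              # per-pixel channel minimum
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            # local minimum over the surrounding patch
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark
```

On a haze-free image with at least one near-zero channel per region, the dark channel is close to zero; a uniformly bright (hazy) image yields a uniformly high dark channel.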
To address this, we propose a densely connected U-Net architecture tailored to removing haze and thin clouds from satellite imagery. The model uses dense blocks for improved feature reuse and gradient flow, and a hybrid loss function combining mean squared error (MSE) and the structural similarity index (SSIM) to balance pixel accuracy with perceptual quality.
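As an illustration of such a hybrid objective, the sketch below combines MSE with a single-window (global) SSIM in NumPy. This is a simplified stand-in: training would use a windowed, differentiable SSIM inside a deep learning framework, and the weight `alpha` here is an assumed value, not the one used in our experiments.

```python
import numpy as np

def combined_loss(pred, target, alpha=0.84, C1=0.01 ** 2, C2=0.03 ** 2):
    """Hybrid loss: alpha * (1 - SSIM) + (1 - alpha) * MSE.

    pred, target: arrays scaled to [0, 1]. SSIM is computed globally
    over the whole image rather than over sliding windows."""
    mse = np.mean((pred - target) ** 2)
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov = np.mean((pred - mu_x) * (target - mu_y))
    # standard SSIM formula with stability constants C1, C2
    ssim = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return alpha * (1.0 - ssim) + (1.0 - alpha) * mse
```

The SSIM term drives the network toward structurally faithful reconstructions, while the MSE term penalizes raw pixel error; identical images yield a loss of zero.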
The model was trained and evaluated on the RS-Haze benchmark and an Indian satellite dataset (Cartosat-2E MX). Results show it outperforms both conventional and recent deep learning models quantitatively (higher PSNR and SSIM) and qualitatively, effectively restoring fine spatial details while removing atmospheric interference. Ablation studies confirm the benefits of the dense connections and the combined loss function.
Limitations include challenges with extremely dense clouds causing over-smoothing and some loss of fine spectral details. Future work aims to extend the approach to multispectral and hyperspectral satellite data for improved atmospheric correction.
Conclusion
We have presented a densely connected U-Net model for haze and thin cloud removal in satellite imagery, trained jointly on the RS-Haze benchmark dataset and our proprietary ISRO SAC dataset. The network is trained with a loss function combining MSE and SSIM, which allows the model to restore pixel fidelity while preserving structure during haze removal, and it achieves strong quantitative results as well as promising qualitative performance on several challenging remote sensing scenes. Our experiments demonstrate the benefit of dense connectivity in facilitating feature reuse while preserving important spatial details, and the model outperforms both the baseline U-Net and AOD-Net. Qualitatively, the model effectively removes dense haze and streaks of thin cloud while preserving the layouts of urban areas, the courses of rivers, and the boundaries of agricultural plots, all features central to the remote sensing workflows of agencies such as NASA, ESA, and ISRO.
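The dense connectivity credited above for feature reuse can be illustrated with a toy forward pass in which each layer receives the channel-wise concatenation of all earlier feature maps. Convolutions are abstracted into generic callables here; this is a hypothetical sketch of the wiring, not our implementation.

```python
import numpy as np

def dense_block_forward(x, layers):
    """Toy DenseNet-style block: every layer sees the concatenation
    of the input and all preceding layer outputs along the channel
    axis, and the block output concatenates everything again.

    x: HxWxC array; layers: callables mapping HxWxK -> HxWxG."""
    features = [x]
    for layer in layers:
        # each layer reuses all earlier feature maps
        out = layer(np.concatenate(features, axis=2))
        features.append(out)
    return np.concatenate(features, axis=2)
```

With input channels C and L layers each emitting G channels, the block output has C + L * G channels, which is the channel-growth pattern that makes early spatial features directly available to later layers.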
Future work will focus on adapting the proposed approach to multispectral and hyperspectral data, with the expectation that the additional spectral information available in these formats will enable even more accurate atmospheric correction.
References
[1] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “AOD-Net: All-in-one dehazing network,” in ICCV, 2017.
[2] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Single image haze removal using a generative adversarial network,” IEEE Transactions on Image Processing, vol. 28, no. 11, pp. 5667–5679, 2019.
[3] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in ECCV, 2016.
[4] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
[5] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An end-to-end system for single image haze removal,” IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187–5198, 2016.
[6] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in MICCAI, 2015.
[7] X. Li, Y. Xu, X. Zhang, S. Wang, H. Wu, and J. Zhao, “RS-Haze: A benchmark for haze removal in remote sensing images,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 189, pp. 280–294, 2022.
[8] H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in CVPR, 2018, pp. 3194–3203.
[9] X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “FFA-Net: Feature fusion attention network for single image dehazing,” in AAAI, 2020, pp. 11908–11915.
[10] Y. Li, S. Anwar, and F. Porikli, “Underwater image enhancement via medium transmission-guided multi-color space embedding,” in ICCV, 2021, pp. 3176–3186.